Errors-in-variables models

In statistics, errors-in-variables models or measurement error models are regression models that account for measurement errors in the independent variables. In contrast, standard regression models assume that those regressors have been measured exactly, or observed without error; as such, those models account only for errors in the dependent variables, or responses.
When some regressors have been measured with errors, estimation based on the standard assumption leads to inconsistent estimates: the parameter estimates do not tend to the true values even in very large samples. For simple linear regression the effect is an underestimate of the coefficient, known as the ''attenuation bias''. In non-linear models the direction of the bias is likely to be more complicated.
== Motivational example ==
Consider a simple linear regression model of the form
:
y_t = \alpha + \beta x_t^{*} + \varepsilon_t\,, \quad t=1,\ldots,T,

where x_t^{*} denotes the ''true'' but unobserved regressor. Instead we observe this value with an error:
:
x_t = x_t^{*} + \eta_t\,,

where the measurement error \eta_t is assumed to be independent of the true value x_t^{*}.
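For concreteness, this data-generating process can be simulated directly. The sketch below is only an illustration: the values of \alpha, \beta and the standard deviations are arbitrary choices, not taken from the article.

```python
import numpy as np

rng = np.random.default_rng(0)

T = 200_000             # sample size (large, so limiting behaviour is visible)
alpha, beta = 0.5, 2.0  # true intercept and slope (arbitrary example values)
sigma_xstar = 1.0       # std. dev. of the true regressor x*_t
sigma_eta = 0.5         # std. dev. of the measurement error eta_t
sigma_eps = 1.0         # std. dev. of the regression error epsilon_t

x_star = rng.normal(0.0, sigma_xstar, T)    # true but unobserved regressor
y = alpha + beta * x_star + rng.normal(0.0, sigma_eps, T)
x = x_star + rng.normal(0.0, sigma_eta, T)  # the regressor actually observed
```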
If the y_t's are simply regressed on the x_t's (see simple linear regression), then the estimator for the slope coefficient is
:
\hat{\beta} = \frac{\sum_{t=1}^{T}(x_t-\bar{x})(y_t-\bar{y})}{\sum_{t=1}^{T}(x_t-\bar{x})^2}\,,

which converges as the sample size T increases without bound:
:
\hat{\beta} \xrightarrow{p} \frac{\operatorname{Cov}[\,x_t, y_t\,]}{\operatorname{Var}[\,x_t\,]} = \frac{\beta\sigma^2_{x^*}}{\sigma^2_{x^*} + \sigma^2_\eta} = \frac{\beta}{1 + \sigma^2_\eta/\sigma^2_{x^*}}\,.
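This limit is easy to check numerically. The sketch below repeats the data generation from the block above so that it runs on its own, then compares the naive least-squares slope with the attenuated value \beta/(1 + \sigma^2_\eta/\sigma^2_{x^*}); with the arbitrary example values used here that is 2/(1 + 0.25) = 1.6.

```python
import numpy as np

rng = np.random.default_rng(0)
T = 200_000
alpha, beta = 0.5, 2.0                      # arbitrary example values
sigma_xstar, sigma_eta, sigma_eps = 1.0, 0.5, 1.0

x_star = rng.normal(0.0, sigma_xstar, T)    # true regressor
y = alpha + beta * x_star + rng.normal(0.0, sigma_eps, T)
x = x_star + rng.normal(0.0, sigma_eta, T)  # observed with error

# Naive OLS slope of y on the noisy x, via the sample-moment formula above.
beta_hat = np.sum((x - x.mean()) * (y - y.mean())) / np.sum((x - x.mean()) ** 2)

# Attenuated probability limit predicted by the formula.
beta_limit = beta / (1.0 + sigma_eta**2 / sigma_xstar**2)

print(f"naive OLS slope : {beta_hat:.3f}")    # close to 1.6, not 2.0
print(f"predicted limit : {beta_limit:.3f}")  # 2 / (1 + 0.25) = 1.6
```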

Variances are non-negative, so that in the limit the estimate is smaller in magnitude than the true value of \beta, an effect which statisticians call ''attenuation'' or regression dilution. Thus the 'naïve' least squares estimator is inconsistent in this setting. However, the estimator is a consistent estimator of the parameter required for a best linear predictor of y given x: in some applications this may be what is required, rather than an estimate of the 'true' regression coefficient, although that would assume that the variance of the errors in observing x^{*} remains fixed. This follows directly from the result quoted immediately above, and the fact that the regression coefficient relating the y_t's to the actually observed x_t's, in a simple linear regression, is given by
:
\beta_x = \frac{\beta}{1 + \sigma^2_\eta/\sigma^2_{x^*}}\,.

It is this coefficient, rather than \beta, that would be required for constructing a predictor of y based on an observed x which is subject to noise.
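This can also be illustrated numerically. The sketch below (same arbitrary example values as above, not code from the article) predicts y from the noisy x once with \beta_x and once with the true \beta; the attenuated coefficient gives the smaller mean squared prediction error.

```python
import numpy as np

rng = np.random.default_rng(1)
T = 200_000
alpha, beta = 0.5, 2.0                      # arbitrary example values
sigma_xstar, sigma_eta, sigma_eps = 1.0, 0.5, 1.0

x_star = rng.normal(0.0, sigma_xstar, T)
y = alpha + beta * x_star + rng.normal(0.0, sigma_eps, T)
x = x_star + rng.normal(0.0, sigma_eta, T)  # only the noisy x is available

beta_x = beta / (1.0 + sigma_eta**2 / sigma_xstar**2)  # = 1.6 here

# Mean squared error of each linear predictor of y based on the noisy x.
mse_beta_x = np.mean((y - (alpha + beta_x * x)) ** 2)  # 0.4^2 + 1.6^2 * 0.25 + 1 = 1.8
mse_beta   = np.mean((y - (alpha + beta   * x)) ** 2)  # 2^2 * 0.25 + 1 = 2.0

print(f"MSE using beta_x = {beta_x:.2f}: {mse_beta_x:.3f}")  # ~1.8, the smaller error
print(f"MSE using true beta = {beta}: {mse_beta:.3f}")       # ~2.0
```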
It can be argued that almost all existing data sets contain errors of differing nature and magnitude, so that attenuation bias is extremely frequent (although in multivariate regression the direction of the bias is ambiguous). Jerry Hausman sees this as an ''iron law of econometrics'': "The magnitude of the estimate is usually smaller than expected."
